Re-engineering the Art of Timing Sign-Off

By John Croix
Integrated System Design
Posted 06/11/01, 09:56:50 AM EDT

What will it take to be successful as semiconductor designs move below 0.15 µm? Designers who are just now breaking the 0.15-µm barrier with multimillion-gate designs realize that many of the basic practices of IC design must be re-engineered to ensure success at this new level. At 0.15 µm and below, designers encounter complex circuit characteristics that, in the past, could safely be approximated or even ignored. As a result, engineering teams today are frantically searching for solutions that deliver more accurate results in the face of increased design complexity. Along with new tools, emerging methods based on new modeling approaches offer the best opportunity for delivering consistent, accurate timing sign-off at 0.15 µm and below.

As circuit feature size decreases and density increases, designers face a host of additional concerns that complicate accurate, reliable timing closure. Smaller, tighter features lead to increased capacitive coupling and more variability in wire delay, which typically comes to dominate gate delay. Indeed, designers find they can no longer decouple logic design from physical design; instead, they must deal with complex interactions between logic and interconnect. Furthermore, the trend toward larger designs and more complex layouts produces dramatic variations in temperature, supply voltage, and power across a chip -- adding another dimension to the problem.

In the face of these concerns, conventional models of cell electrical behavior and timing performance have broken down. Conventional models provide only static data representations -- typically using an equation or data table that is evaluated by each design tool's internal delay calculation algorithm. When the assumptions used to build the internal equation or data become invalid due to advances in technology, static models can misrepresent actual behavior in leading-edge designs. For example, the dynamic relationship between driving pin, interconnect and sink pin isn't accounted for in conventional static models, potentially leading to significant errors in analysis (see Figure 1).
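
As a rough illustration of what such a static model looks like in practice, the C sketch below evaluates a cell-delay lookup table indexed by input slew and output load, using the kind of interpolation each design tool must implement for itself. All table values and axis points are invented for illustration and do not come from any real cell library.

/* A minimal sketch of a conventional "static" cell-delay model: a lookup
 * table that the design tool's own interpolation code must evaluate.
 * Table values and axis points are illustrative only. */
#include <stdio.h>

#define N_SLEW 3
#define N_LOAD 3

static const double slew_axis[N_SLEW] = { 0.05, 0.20, 0.80 };   /* ns */
static const double load_axis[N_LOAD] = { 0.01, 0.05, 0.20 };   /* pF */
static const double delay_tbl[N_SLEW][N_LOAD] = {               /* ns */
    { 0.08, 0.15, 0.42 },
    { 0.11, 0.19, 0.48 },
    { 0.21, 0.30, 0.62 },
};

/* Linear interpolation along one axis. */
static double lerp(double x, double x0, double x1, double y0, double y1)
{
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0);
}

/* The tool's internal delay calculator: bilinear interpolation over the
 * static table. Every tool that consumes this table must reimplement
 * this step, which is where tool-to-tool inconsistencies creep in. */
static double static_cell_delay(double slew, double load)
{
    int i = (slew < slew_axis[1]) ? 0 : 1;
    int j = (load < load_axis[1]) ? 0 : 1;
    double d0 = lerp(load, load_axis[j], load_axis[j + 1],
                     delay_tbl[i][j], delay_tbl[i][j + 1]);
    double d1 = lerp(load, load_axis[j], load_axis[j + 1],
                     delay_tbl[i + 1][j], delay_tbl[i + 1][j + 1]);
    return lerp(slew, slew_axis[i], slew_axis[i + 1], d0, d1);
}

int main(void)
{
    printf("delay = %.3f ns\n", static_cell_delay(0.10, 0.03));
    return 0;
}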

Furthermore, this static modeling approach leads to inconsistent results across tools, because each application is responsible for correctly interpreting the static model. Different tools that use the same static model deliver consistent results only if they use the same algorithm to evaluate it. Unfortunately, most modern design flows use tools from multiple vendors -- virtually ensuring inconsistent timing and power results.

Active models = data + algorithms

Newer modeling approaches call for the model itself to evaluate core algorithms such as delay calculation -- dynamically providing tools with instance-specific data that accounts for intra-chip variations. These active models combine data and algorithmic content within the model itself -- providing consistent views of the model across all applications. Furthermore, active models provide a means to ensure new types of analyses are introduced uniformly and applied consistently across a design.

Because active models combine data and algorithmic content in a single unit, they can be stored as a UNIX shared-object library and loaded into the application at runtime. Furthermore, active models can enable guard-band reduction by automatically adapting to their environment in an instance-specific manner -- providing tools with data that accounts for local process, temperature, and voltage conditions. Designers can also use active models to exploit advanced algorithms not originally encapsulated in their applications. For example, by making the model aware of IR drop, any application that consumes the model can account for its effects, regardless of whether the application was originally designed to do so. Static models, by contrast, force the library provider to stay with aging data representations that are demonstrably less accurate, and a new static representation can be distributed and used only when every application in the design flow is able to consume it.
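
To make the shared-object idea concrete, here is a minimal sketch of how an application might load such a model at runtime on UNIX. The library name and the entry point model_cell_delay, along with its signature, are hypothetical; the article does not define the actual interface.

/* A sketch of runtime loading of an active model shared object.
 * The library name and entry point are hypothetical illustrations.
 * Build with: cc app.c -ldl */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef double (*delay_fn)(const char *cell, double slew, double load);

int main(void)
{
    /* The library carries both the data and the evaluation code. */
    void *lib = dlopen("./libactive_model.so", RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    delay_fn cell_delay = (delay_fn)dlsym(lib, "model_cell_delay");
    if (!cell_delay) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(lib);
        return EXIT_FAILURE;
    }

    /* The application never interprets raw tables; it just asks. */
    printf("NAND2 delay = %.3f ns\n", cell_delay("NAND2", 0.10, 0.03));

    dlclose(lib);
    return 0;
}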

Algorithmic content

What's so important about algorithmic content? Let's step back a few years and consider the Internet browser. Before the introduction and industry adoption of Java, of scripting languages such as JavaScript, and of plug-ins such as Flash, Internet browsers were limited to displaying data-only (HTML) representations of web pages. Each browser was (and still is) responsible for interpreting and rendering HTML content -- and different browsers, of course, interpreted and displayed the same HTML content differently.

As web pages became more sophisticated, browser developers created extensions intended to deliver needed functions or display new features. Although some extensions became de facto standards, others could be consumed only by a particular browser; the rest would ignore the unknown tags, display a gray box, or even crash. Web designers thus found themselves caught between an urgent need to move beyond static HTML pages and an inherent lack of consistency in displayed results across browsers.

With the introduction of Java, scripts, and plug-ins, however, web page designers found they could build pages that deliver exactly the functions and features they need -- regardless of the limitations of static HTML. Indeed, they could combine HTML data with new algorithms -- and remain confident that anyone using a current browser would see the same results.

Instance-specific behavior

Active models are to IC design applications what Java applets are to web pages. Instead of limiting the model's content to static representations that every application in the flow must accept, active models extend the accuracy and utility of an application. Through a standard application programming interface (API) between application and model, each side can gather the data it needs to satisfy its own algorithms. The application doesn't need to know how the model calculates the requested values; it only needs to supply data to the model through the published API. The model writer is thus free to choose the best representation for the model, in terms of both accuracy and runtime performance.
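
The sketch below illustrates this division of labor under stated assumptions: the application publishes a small set of callbacks, and the model pulls whatever instance data its algorithm requires. All names, signatures, and the derating arithmetic here are hypothetical; the actual standard interface, discussed below, differs.

/* A hedged sketch of the published-API idea. The application registers
 * callbacks; the model computes a delay without exposing how. */
#include <stdio.h>

/* Callbacks the application publishes to every model. */
typedef struct {
    double (*get_supply_voltage)(void *inst);   /* volts at this instance */
    double (*get_temperature)(void *inst);      /* degrees C              */
    double (*get_output_load)(void *inst);      /* pF                     */
} app_callbacks;

/* Model side: a toy load-linear delay derated for local voltage and
 * temperature. A real model could run any algorithm behind this same
 * interface -- the application never sees the internals. */
static double model_delay(void *inst, double input_slew,
                          const app_callbacks *app)
{
    double v    = app->get_supply_voltage(inst);
    double t    = app->get_temperature(inst);
    double load = app->get_output_load(inst);
    double base = 0.05 + 0.4 * input_slew + 2.0 * load;   /* ns, toy data */
    return base * (1.8 / v) * (1.0 + 0.001 * (t - 25.0));
}

/* Application side: trivial stand-ins for a real netlist database. */
static double inst_vdd(void *inst)  { (void)inst; return 1.62; }
static double inst_temp(void *inst) { (void)inst; return 85.0; }
static double inst_load(void *inst) { (void)inst; return 0.03; }

int main(void)
{
    app_callbacks app = { inst_vdd, inst_temp, inst_load };
    printf("delay = %.3f ns\n", model_delay(NULL, 0.10, &app));
    return 0;
}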

Consider the case of instance-specific operating point (ISOP) analysis. Most applications apply operating-point data, such as voltage or temperature, uniformly across the design being analyzed. In fact, designs typically exhibit marked variations in supply voltage and temperature (see Figures 2a and 2b). Likewise, the assumption of uniform over-the-cell routing commonly used to establish capacitance data for static models quickly breaks down in real-world designs (see Figure 3). As a result, timing analysis identifies more false violations with static models than with ISOP models (see Figure 4). For designers looking to improve the speed and accuracy of timing sign-off, these false failures represent a significant loss of time and effort.

Designers often have to run external point tools to analyze instance-specific variations and back-annotate the results to the application through an external facility such as the standard delay format (SDF). Each operating-point analysis means another external tool run and another back annotation. When violations are detected, the designer typically tightens constraints in an attempt to force the application to optimize the design further. Tightening constraints, however, is a double-edged sword, because constraints are difficult to apply against only the violators: once the application alters a back-annotated segment, it reverts to its previous cell and interconnect data -- the data that produced the incorrect result in the first place. In any event, tighter constraints often leave the application spending large amounts of CPU time optimizing paths that aren't actually critical but that violate the new constraints.

With ISOP-enabled models, instance-specific variations can be incorporated by the application without the use of an external back annotation facility. The models' algorithms can evaluate the instance-specific conditions when timing and power calculations are made. These variations are then accounted for during construction, analysis, and optimization. The result is that the application uses consistent, accurate data during all phases of execution, producing a better result in a shorter period of time.
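
A toy comparison makes the payoff concrete: the sketch below times the same three-cell path once with a uniform chip-wide worst-case operating point and once with each instance's own local conditions. The derating coefficients and instance data are invented for illustration only.

/* Uniform worst-case analysis versus instance-specific (ISOP)
 * evaluation, on a toy three-cell path. All numbers are invented. */
#include <stdio.h>

typedef struct {
    const char *name;
    double nominal_delay;   /* ns at 1.80 V, 25 C         */
    double vdd;             /* actual local supply, volts */
    double temp;            /* actual local temp, deg C   */
} cell_instance;

/* Toy linear derating: delay grows as VDD drops and temperature rises. */
static double derated_delay(double nom, double vdd, double temp)
{
    return nom * (1.0 + 0.8 * (1.80 - vdd)) * (1.0 + 0.001 * (temp - 25.0));
}

int main(void)
{
    /* Three cells on a path, each seeing different local conditions. */
    cell_instance path[] = {
        { "U1", 0.30, 1.75, 40.0 },
        { "U2", 0.45, 1.70, 55.0 },
        { "U3", 0.25, 1.78, 35.0 },
    };
    double uniform = 0.0, isop = 0.0;

    for (int i = 0; i < 3; i++) {
        /* Uniform analysis derates every cell at chip-wide worst case. */
        uniform += derated_delay(path[i].nominal_delay, 1.62, 125.0);
        /* ISOP analysis uses each instance's own conditions. */
        isop    += derated_delay(path[i].nominal_delay,
                                 path[i].vdd, path[i].temp);
    }
    printf("uniform worst-case path delay: %.3f ns\n", uniform);
    printf("instance-specific path delay:  %.3f ns\n", isop);
    /* The gap between the two is guard band that ISOP analysis can
     * recover, avoiding false timing violations. */
    return 0;
}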

Standard active model interface

Active models are only possible if all applications adhere to a single API for activating the model and extracting content. In 1999, the IEEE ratified just such a standard: the Standard for Delay and Power Calculation System (IEEE 1481-1999), which IC design and analysis tools can use to determine cell and interconnect timing and power characteristics. The standard was originally developed by IBM and promoted by SI2 (Silicon Integration Initiative); the Open Library API (OLA) extends it with additional functions and properties.

When the standard was first introduced, the only way to create these active models was through a scripting language called Delay Calculation Language (DCL), donated by IBM to SI2 to help propagate active models. Recently, Silicon Metrics Corporation introduced its own active-model compiler, the Open Model Compiler (OMC), which creates active models called SiliconSmart Models (SSMs).

As designers break through the 0.15-µm barrier, increasingly complex physical interactions drive the need for a new infrastructure for modeling the detailed electrical behavior of ICs. That infrastructure must reconcile the mutually opposing requirements of detailed accuracy, high throughput, and capacity for system-on-chip (SoC) designs. Active models provide just such an infrastructure.

Active models bring the silicon vendor's expertise and intellectual property to the engineer's desktop. Furthermore, active models extend the basic functionality of existing tools by incorporating new physical phenomena and algorithms in the models. Consequently, designers are able to account for noise, power and other critical effects that rise in importance below 0.15 µm. As a result of the improved accuracy and consistency afforded by active models, designers are able to achieve timing sign-off faster with reduced guard-banding -- which translates directly into savings in die size, power and yield. When used in concert with emerging physical synthesis tools, active models help designers build better chips at a faster clip.

John Croix is co-founder and Chief Technology Officer of Silicon Metrics.
